Demand That Congress Pass the For the People Act to End This Corruption

The 2020 election is shaping up to be the most expensive in American history. Ten years after Citizens United, our elections are awash with the money of mega-donors and corporations that drown out the voices of everyday Americans — with serious consequences for racial and economic justice.

An antidote is sitting on Mitch McConnell’s desk: it passed the House a year ago and still has not been brought to a vote in the Senate. H.R. 1, the For the People Act, would blunt the distorting influence of big money in politics. It could transform who runs for office, who wins, and what issues get prioritized in Congress.

Elections have become extravagantly costly. In 2016, campaign spending by or in support of Donald Trump and Hillary Clinton totaled $2.4 billion. The average Senate winner in 2018 spent $15.7 million, with challengers needing on average $23.8 million to topple incumbents. Even local election costs can be forbidding. Spending in the Los Angeles County school board primary last month topped $6 million.

Occasionally a candidate like Bernie Sanders has strong enough national appeal to raise the money to compete from a broad network of small donors, but that’s the exception. The vast majority of candidates have to rely on a class of large donors to be viable.

Less than 1 percent of the population provides the majority of campaign funds. Indeed, just 25 people pumped over $600 million into the 2016 federal elections.

That donor class looks nothing like America. The compounding of historical racial subordination and ongoing discrimination has given us an economy in which the Forbes 400 billionaires have as much wealth as the entire Black population and a quarter of the Latino population combined. Today, the top 1 percent are more than 90 percent white; the top 10 percent are 85-90 percent white. These are the groups that dominate political giving in America.

Dēmos’s analysis of campaign finance records bears this out. Ninety-two percent of federal election donors in 2014 and 91 percent of donors in 2012 were white. The numbers are even more skewed among large donors. Ninety-four percent of those giving more than $5,000 in 2014 and 93 percent in 2012 were white.

What’s the consequence of a political system dependent on an overwhelmingly white donor class? The perpetuation of racial inequality.

First, the big money system is a barrier to entry for Black and brown candidates. Studies show they’re less likely to have networks of rich friends and business associates, making it difficult for them to survive the “wealth primary” where donors filter the candidate pool before a single vote is cast. When candidates of color do run, they raise on average 47 percent less than their white counterparts. White candidates are also far more likely to be in a position to self-fund their campaigns. This is a big reason 90 percent of our elected officials are white, even though 37 percent of us are people of color.

Second, the policy preferences of the donor class are far out of step with those of the general public, and particularly of people of color.

On economic policy, for example, polls show that people of color support the role of government in reducing inequality at significantly higher rates (67 percent) than do people earning over $100,000 a year (53 percent). People of color are also more likely to list job creation and affordable college as their economic priorities, whereas the wealthy are more likely to cite lower taxes and deficit reduction.

The challenge is that the Supreme Court has invalidated commonsense campaign finance protections time and again. It has struck down reasonable contribution limits and restrictions on self-financing, allowed the rise of SuperPACs, and greenlit wealthy individuals pumping millions into the system.

H.R. 1 contains innovative programs that would stay within the lines drawn by the Court and still curb the harmful influence of big money.

The most significant is a public financing system for congressional candidates that would match small-donor contributions of $200 or less at a rate of 6:1. In this way, a $20 donation would become $140, and a $200 donation $1,400. The cost of the program is reasonable: one estimate puts it at $3 billion over 10 years, or roughly $1 per citizen per year. The bill also creates a pilot program of $25 “My Choice” vouchers that people can give to congressional candidates they support.
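To make the arithmetic concrete, here is a minimal Python sketch of the 6:1 match and the per-citizen cost figure. The $200 threshold, the 6:1 rate, and the $3 billion estimate come from the text above; the population figure of roughly 330 million is an assumption used only to check the “$1 per citizen per year” claim.

```python
# Minimal sketch of the 6:1 small-dollar match described above.
# The $200 threshold and 6:1 rate come from the text; the ~330 million
# population figure is an assumption used to check the cost estimate.

MATCH_RATE = 6          # public dollars added per qualifying private dollar
QUALIFYING_CAP = 200    # contributions of $200 or less are matched

def total_value(donation: float) -> float:
    """What a donation is worth to the campaign once the match is applied."""
    if donation <= QUALIFYING_CAP:
        return donation * (1 + MATCH_RATE)   # donor's dollar plus six matched dollars
    return donation                          # larger gifts receive no match

print(total_value(20))    # 140.0  -> a $20 donation becomes $140
print(total_value(200))   # 1400.0 -> a $200 donation becomes $1,400

# Cost estimate cited above: about $3 billion over 10 years.
estimated_cost = 3_000_000_000
years = 10
population = 330_000_000  # assumed U.S. population
print(round(estimated_cost / years / population, 2))  # 0.91 -> roughly $1 per citizen per year
```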

These programs would amplify the voices of people currently being drowned out by big money. They would offer a path for congressional and presidential candidates to rely on donations from everyday people, not wealthy donors. Similar public financing programs in New York City and Arizona and a voucher program in Seattle have diversified the donor pool and allowed more candidates of color to run. These programs can help produce more equitable public policy. For example, the advent of public financing in Connecticut was crucial to breaking a legislative logjam and making it the first state in the nation to guarantee paid sick leave.

Countering the undue influence of big money in our elections is a civil rights issue. H.R. 1, the For the People Act, would be a giant step forward. The American people should demand that it become law.

An entire generation is losing faith in American capitalism. Widening inequality and declining mobility have led to an erosion of trust in the system. In a 2018 Gallup survey, only 45 percent of young adults said they supported capitalism. Fifty-one percent supported socialism.

These numbers are stark, and so are the failures that underlie them, but history suggests that the failures can be addressed. Inequality has been high before, and American society found ways to reduce it; opportunity, too, can be widened by smart public choices. Fixing the system will not be easy, but we have the tools we need, if we can find the political will to use them.

Capitalism faces another threat, however, and it may prove more fundamental: Americans’ growing reliance on technologies—smartphones, social media, gaming consoles, shopping sites—that have become predatory and are quickly becoming more so. These gadgets and platforms have been integrated into nearly everything we do. Reaching for your phone to read a text, peruse your Instagram feed, or play a round of Candy Crush has become second nature, an involuntary response to even the shortest bout of boredom. This reliance—addiction is a better word for it—is undermining basic tenets of the American economic model.

In a well-functioning market, consumers have the freedom to act in their own self-interest and to maximize their own well-being. Prices are transparent, and people have a basic level of trust that exchanges of goods, services, and money benefit all parties. Consumers, it is assumed, are discerning and rational in the face of the market’s blandishments—an assumption that is crucial to the whole system’s ability to produce social good. Of course, markets have never functioned in the real world exactly as they do in economics textbooks. But in the U.S., the system has tended to work, allocating resources efficiently, generating growth, and improving the living conditions and welfare of most people.

But the new powers in the digital age have built their business models on strategies—enabled and turbocharged by self-improving algorithms—that actively undermine the principles that make capitalism a good deal for most people. Their aim is not merely to gain and retain customers, but to create a dependency on their products.

Carmakers, appliance manufacturers, and cosmetics conglomerates have always been happy to prey upon their customers’ desires and insecurities if doing so might stoke an irrational desire to buy their products. But their methods—advertising, primarily—are crude compared with the sophisticated tactics available to today’s tech giants. The buzzes, badges, and streaks of social media; the personalized “deals” of commerce sites; the camaraderie and thrilling competition of gaming; the algorithmic precision of the recommendations on YouTube—all have been finely tuned to keep us coming back for more. And we are: The average person taps, types, swipes, and clicks on his smartphone 2,617 times a day. Ninety-three percent of people sleep with their devices within arm’s reach. Seventy-five percent use them in the bathroom.

The sway these technologies have over us is unhealthy, and the ways in which they can worsen our social relationships and our discourse are worthy subjects of public concern. But addiction to technology poses another threat, too. When we are too hooked on our phones and feeds to make decisions that align with our own self-interest, the free market ceases to be free.

Where an affinity ends and addiction begins is not always clear, but when it comes to our relationships with technology, the signs of addiction are manifest. We are spending more and more hours online, forgoing time with loved ones. Deprived of a decent Wi-Fi connection, we grow irritable. We risk life and limb to send texts from the road. In a 2019 Common Sense Media survey of 500 parents, 45 percent confessed to feeling at least somewhat addicted to their phone. Among parents whose children had their own phone, 47 percent said they believed that their kids were addicted too.

Many technology companies engineer their products to be habit-forming. A generation of Silicon Valley executives trained at the Stanford Behavior Design Lab in the art of designing products that users find hard to put down. The lab’s founder, the experimental psychologist B. J. Fogg, has isolated the elements necessary to keep users of an app, a game, or a social network coming back for more. One former student, Nir Eyal, distilled the discipline in Hooked: How to Build Habit-Forming Products, an influential manual for developers. In it, he describes the benefits of enticements such as “variable rewards”—think of the rush of anticipation you experience as you wait for your Twitter feed to refresh, hoping to discover new likes and replies. Introducing such rewards to an app or a game, Eyal writes approvingly, “suppresses the areas of the brain associated with judgment and reason while activating the parts associated with wanting and desire.” Indeed, that brief lag between refresh and reveal is not Twitter crunching data—it’s an intentional delay written into the code, designed to elicit the response Eyal describes.
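As a purely illustrative sketch, not anyone’s actual product code, the pattern Eyal describes can be reduced to a few lines: an artificial pause before the reveal, followed by a payoff that arrives unpredictably, which is the classic intermittent-reinforcement loop.

```python
# Hypothetical illustration of the "variable rewards" pattern described above.
# This is not Twitter's code; it only shows the shape of the technique:
# an artificial delay before the reveal, then a reward that does not come every time.

import random
import time

def refresh_feed() -> str:
    """Simulate a pull-to-refresh with an intentional suspense delay."""
    time.sleep(1.5)                     # artificial pause while the spinner turns
    if random.random() < 0.4:           # the payoff is unpredictable by design
        new_likes = random.randint(1, 20)
        return f"{new_likes} new likes and replies!"
    return "Nothing new yet. Pull to refresh again?"

if __name__ == "__main__":
    for _ in range(3):
        print(refresh_feed())
```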

A growing chorus of critics is warning of the dangers inherent in such manipulation. Tristan Harris, a former technology designer at Google—and another former student of Fogg’s—is a co-founder of the Center for Humane Technology. Harris has likened carrying his iPhone to having “a slot machine in my pocket,” and indeed many of its features mimic those of the most addictive games on any casino floor.

Harris has worked to reveal the tactics companies use to keep us hooked. On YouTube, for example, the auto-play function deprives viewers of a natural moment at which to disengage. But it’s not just that the site keeps queuing up new clips for you to watch. YouTube’s algorithms are designed to hold your interest by serving up content you can’t resist, and the algorithms have gotten very good. As of 2017, users were watching a collective 1 billion hours of YouTube videos a day, more than 70 percent of which had been served to us in the form of algorithmic recommendations. Pause over that number for a moment: Nearly three-quarters of the YouTube videos we’re watching have been fed to us.

The advent of addiction as the business model of some of the country’s largest companies—companies with which many Americans interact every day—has fundamentally shifted the balance of power between consumers and producers. This was not always the most likely outcome of the digital revolution. In many facets of our lives, technology has improved transparency and given potential buyers access to a wealth of information they previously lacked. In the analog age, a car shopper would have little more than the Kelley Blue Book—and his own time and willingness to kick tires—to guide him to the best deal. Some of us appreciate that the Instagram algorithm knows whether we are 16 or 60 and whether we prefer Timberland or Tory Burch, and markets to us accordingly.

But the more reliant we become on a given app or platform, the more opportunities its makers have to observe our behavior—and the better they understand our behavior, the better they become at manipulating it to their own ends, whether their business model is serving ads or selling to us directly. It’s a virtuous cycle for the producers, and a vicious one for the consumers. Often, we barely recognize that we’re participating in it, because the barriers to participation are so low. Many of the most addictive platforms lure us in with the promise of a free service. But Snapchat, TikTok, and Twitch can be considered free only if we decide that our time, and the personal information we’re providing, have no value.

Digital life, we must remember, is still in its infancy, and the powers of the corporations that govern that life are still growing. Companies are studying what we search for, what nudges we respond to, and what times of day we engage in certain online behaviors. Soon, cameras and sensors will likely be tracking what frightens, amuses, and arouses us, allowing data collectors to know more about us than we perhaps even know about ourselves. (The Wall Street Journal has reported that popular iPhone apps that track users’ heart rate and menstrual cycle were passing that information to Facebook, though the social network denied using the information to its advantage.)

The suggestion that we need to be protected from such tactics might seem paternalistic, and if consumers were the rational actors who populate econ textbooks, it might be: A person could decide for herself whether to exchange some amount of privacy for the joy of viewing friends’ photos or the convenience of tracking her heart rate. But the addiction economy relies on an asymmetrical exchange of information. Users are expected to blithely surrender their private information for access to services. The data collectors, meanwhile, fiercely guard their own privacy, typically refusing to disclose what information they have, whom they sell it to, and how they use it to manipulate our behavior.

And they do, in fact, manipulate our behavior. As Harvard Business School’s Shoshana Zuboff has noted, the ultimate goal of what she calls “surveillance capitalism” is to turn people into marionettes. In a recent New York Times essay, Zuboff pointed to the wild success of Pokémon Go. Ostensibly a harmless game in which players use smartphones to stalk their neighborhoods for the eponymous cartoon creatures, the app relies on a system of rewards and punishments to herd players to McDonald’s, Starbucks, and other stores that pay its developers for foot traffic. In the addiction economy, sellers can induce us to show up at their doorstep, whether they sell their wares from a website or a brick-and-mortar store. And if we’re not quite in the mood to make a purchase? Well, they can manipulate that, too. As Zuboff noted in her essay, Facebook has boasted of its ability to subliminally alter our moods.

The company has denied accusations that it uses this power to sell targeted ads; others, however, will surely take advantage of our vulnerabilities. Consider “drunk shopping,” a bad habit Americans have acquired in the age of the Buy It Now button: Various surveys have suggested that it is already a multibillion-dollar phenomenon. It’s not difficult to imagine any number of technology platforms determining when we’re likely to be tipsy—or discerning it from a slur in our speech or typos in our texts—and using that information to time their pitch.

Companies are also leveraging our reliance on them—and their knowledge of us—to get us to pay more for their products. By tracking our purchasing patterns (what we will shell out for an airline upgrade; how sensitive we are to surge pricing), they can make offers based on what each individual is willing to pay rather than what the market will bear. One study found that the price of headphones displayed in Google search results varied depending on users’ web history, with prices going up—by a factor of four—when past searches suggested affluence. Another study, by the Brandeis economist Benjamin Reed Shiller, found that while a seller with access to basic demographic information about a specific buyer can gain 0.3 percent more profit than the market price would produce, a seller with access to an individual’s browsing history can increase profit by 14.6 percent.

Here, too, a fundamental benefit of capitalism is threatened. Traditionally, buyers have benefited from what economists call consumer surplus—the difference between what we would pay for a good and what sellers actually charge. With their newfound information advantage, sellers can retain far more of that surplus for themselves. Whether or not the average American understands the concept of consumer surplus, individualized pricing violates a sense of fairness: We’ve long assumed—but can assume no longer—that the price you pay is the price I pay.
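A small numeric sketch, using invented willingness-to-pay figures rather than data from the studies cited above, shows how individualized pricing transfers consumer surplus from buyers to the seller.

```python
# Toy illustration of consumer surplus under uniform vs. individualized pricing.
# The willingness-to-pay numbers and the seller's cost are invented for this example;
# they are not drawn from the Shiller study or the headphone experiment cited above.

buyers_willingness_to_pay = [60, 90, 120, 150]  # what each buyer would pay, in dollars
unit_cost = 50                                  # seller's cost per unit

# Uniform pricing: one posted price for everyone (say, $90).
uniform_price = 90
uniform_surplus = sum(w - uniform_price for w in buyers_willingness_to_pay
                      if w >= uniform_price)
uniform_profit = sum(uniform_price - unit_cost for w in buyers_willingness_to_pay
                     if w >= uniform_price)

# Individualized pricing: the seller charges each buyer exactly what they will bear.
personal_surplus = 0                                   # nothing left on the table for buyers
personal_profit = sum(w - unit_cost for w in buyers_willingness_to_pay)

print(f"Uniform price:      consumer surplus ${uniform_surplus}, profit ${uniform_profit}")
print(f"Personalized price: consumer surplus ${personal_surplus}, profit ${personal_profit}")
```

In this toy example, charging each buyer exactly what they will bear raises the seller’s profit from $120 to $220 while reducing consumer surplus from $90 to zero.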

None of this is an argument against progress. Technology has helped create a world of convenience and abundance, and it will continue to do so. Properly channeled, it can improve the functioning of a market economy. But for society to harness technology’s potential, we must understand how it is reshaping our lives.

In the past, we may not have entirely trusted General Motors or General Electric, but most people didn’t believe they were warping our desires or robbing us of our time and agency. By contrast, the biggest, best-known companies in the contemporary American economy—Facebook, Amazon, Google—are now viewed with growing suspicion and mixed emotions. A Pew survey found that the percentage of Americans who think technology companies have a net positive impact on the country had fallen from 71 percent in 2015 to 50 percent in 2019. In part, such sentiments flow from the dawning realization that these and other tech behemoths have hooked us on their services in order to profit from us. But we’re also beginning to recognize how much of our time we’ve lost. We’re dismayed by how we’re spending our days, but feel powerless to abandon our new bad habits, as anyone who has deleted, then reinstalled, the Facebook app can attest.

Will these discontents push people toward revolutionary backlash? Perhaps not. But that’s almost beside the point. The capitalism that is taking shape in this century—predatory, manipulative, extremely effective at short-circuiting our rationality—is a different beast from the classical version taught in university classrooms. It cannot be regarded as beneficent and should not be given the benefit of the doubt. The profit motive and the means to create dependency are too dangerous a combination.

American society has long treated habit-forming products differently from non-habit-forming ones. The government restricts the age at which people can buy cigarettes and alcohol, and dictates places where they can be consumed. Until recently, gambling was illegal in most places, and closely regulated where it was allowed. But Big Tech has largely been left alone to insinuate addictive, potentially harmful products into the daily lives of millions of Americans, including children, by giving them away for free and even presenting them as a social good. The most addictive new devices and apps may need to be put behind the counter, as it were—packaged with a stern warning about the dangers inherent in their use, and sold only to customers of age.

Perhaps the most immediate and important change we can make is to introduce transparency—and thus, trust—to exchanges in the technological realm. At present, many of the products and services with the greatest power to manipulate us are “free,” in the sense that we don’t pay to use them. But we are paying, in the form of giving up private data that we have not learned to properly value and that will be used in ways we don’t fully understand. We should start paying for platforms like Facebook with our dollars, not our data.

So far there is no better system than market-based capitalism for balancing freedom, fairness, efficient allocation of goods, and growth. Given the fondness for free markets that prevails among Silicon Valley executives, tech innovators ought to tread carefully if they want that system to survive.